Skin Cancer Detection using Deep Learning
Rajarajeswari. S1, J. Prassanna1*, Abdul Quadir Md1, Christy Jackson J1,
Shivam Sharma1, B. Rajesh2
1School of Computer Science and Engineering (SCOPE), Vellore Institute of Technology, Chennai
2Department of Mathematics, University College of Engineering, Pattukkottai, 614701, India
*Corresponding Author E-mail: prassanna.j@vit.ac.in
ABSTRACT:
Introduction: The identification and monitoring of benign moles and skin cancers is a challenging task because skin lesions vary very little in appearance and only a limited amount of information is available. There are seven fundamental types of skin cancer, including Basal Cell Carcinoma (BCC), Melanoma and Squamous Cell Carcinoma (SCC), of which Melanoma is the most dangerous, with a low survival rate. Objective: This work classifies skin lesions with the help of a Convolutional Neural Network (CNN), with the images trained end-to-end. A dataset comprising 10,000 clinical images was used to train the CNN. Materials and Methods: The skin cancer identification process is generally separated into two basic components: image pre-processing, which includes classifying the images and removing duplicates, and sharpening, which resizes the skin image. This work discusses a methodology to segment high-level skin lesions and identify malignancy more accurately with the help of deep learning: 1) construction of a neural network that detects the edge of a large lesion accurately; 2) design of a model that can run on mobile phones. The model uses transfer learning based on a deep neural network, with fine-tuning to attain high prediction accuracy. Results: The dataset comprises a total of 10,000 images stored in two folders, with the metadata stored in a data frame. The 10,000 dermoscopic images include 374 melanoma images, 254 seborrheic keratosis images and 1,372 nevus images. Using transfer learning, the validation loss, Top-2 accuracy and Top-3 accuracy have been calculated, and the results have been compared with different models. Conclusions: The proposed system can categorize healthy skin, eczema, acne, and malignant and benign skin lesions. 
The proposed work investigates the attributes learned by the deep convolutional neural network. The attributes are extracted and the dataset is divided into seven categories, on which the data is trained and validated. The validation loss, Top-2 accuracy and Top-3 accuracy are then calculated.
KEYWORDS: Neural network, Skin cancer, MobileNet, Deep learning, Machine learning, skin_recnn, skin_segnn.
INTRODUCTION:
In recent decades, the computer revolution has expanded the use of computers in all fields. The era of keeping massive records for data storage has come to an end. In practically every field, computers have become the most widely utilised instrument. It has made life easier for people. Computers have also made research much easier.
Biomedical engineering has delivered advanced medical devices for diagnosis and prevention. These include diagnostic test kits, antibodies, vaccines and radiolabelled biological therapeutics used for imaging and analysis.1,2,3
It has played a significant part in addressing issues relating to human health, since it has the flexibility to reduce global health disparities by providing advanced technologies. With the advancement of technology, many types of cancer can now be identified, and they have become fairly frequent diseases. Skin cancer is among the most researched. Pollution has also increased as a result of the increased usage of cosmetics.
Skin cancer stages:
As soon as the disease is identified, the next step is to determine the cancer's stage. Various characteristics, such as thickness, depth of penetration, and the extent to which the melanoma has spread, help define the cancer's stage. Patients are treated based on the stage that has been diagnosed.
The early stages of melanoma (Stage 0 and Stage I), which are the beginning stages of skin cancer, are localised. Stage 0 cancers (in situ) are non-invasive and have not penetrated below the outer layer of the skin (the epidermis). In Stage I, the tumours are small, have invaded the dermis beneath the epidermis, and lack distinguishing features such as ulceration that would put them in jeopardy of spreading (metastasizing) to nearby lymph nodes or farther.4,5,6
Cancers in Stage II are larger (often 1 mm thick or more) and have other features, such as ulceration, that put them at a high risk of spreading to nearby lymph nodes or beyond the primary tumour. These are called transitional or "high-risk" melanomas. Melanomas that have progressed to Stages III and IV have spread to other regions of the body. Within each stage there are also subdivisions.
Machine learning:
Machine learning is not limited to a single field; it encompasses a wide range of topics and is continually expanding. Its entire goal is to program a computer in such a way that it can accomplish tasks without having to be explicitly programmed each time. A computer learns from previous inputs, notes patterns, and improves with practice. The process by which the machine learns from presented data with the help of an algorithm is referred to as training, and its output is referred to as a model. With the support of its experience, the learner's goal is to generalise: generalization refers to a machine's ability to operate accurately on previously unseen or unknown data.
With the invention of automated vehicles, effective web search, practical speech recognition, and a massive expansion of knowledge of the human genome in the last decade, machine learning has astounded us. Today, machine learning is so widely used that you may unwittingly rely on it most of the time.
The use of machine learning in the health-care profession is sweeping the industry, and machine learning in medicine has recently received a lot of attention. More information frequently generates better outcomes when it comes to machine adaptation, and the health services sector sits on an information goldmine. Algorithms can provide an immediate benefit to disciplines with repeatable methods. Those with large image databases, such as radiology, cardiology, and pathology, are also good candidates. Machine learning may be programmed to analyse images, identify anomalies, and point to areas that require attention, thereby improving the precision of each of these activities.
The use of machine learning for categorising medical images has been shown to be effective in the past for identifying numerous diseases. The convergence of machine learning on mobile and medical devices is advancing rapidly in many medical fields. mHealth is regarded as one of the most transformative enablers for pervasive distribution of health information and health applications. People's lives are being transformed at an unprecedented rate by machine-learning-based mHealth products. However, it takes an enormous amount of data to train a classification model, and the processing and storage of that data creates significant system demands for mobile products.
Sharen et al7 proposed a neural network to detect, diagnose, and classify DCM and HCM disorders in an efficient and automatic manner. The echocardiogram footage or frame is pre-processed to reduce noise, with the median filter performing best, giving strong PSNR and low MSE. FCM clustering is used to segment the pre-processed image, and statistical features such as Gray Level Difference statistics are retrieved. The retrieved features are evaluated and categorised using a neural network classifier, achieving 90% accuracy. Patient data in the form of images and Electronic Patient Records (EPR) were turned into shadow virtual images by Anbarasi et al8. It is possible to hide the EPR in a medical image by using a DNA-based method: Huffman encoding compresses the DNA-encoded secret image before it is securely shared into ghost images using Shamir secret sharing. In Sharon et al9, a Gaussian filter is used to pre-process the echocardiogram frames, and DPSO-FCM based clustering is used to segment the pre-processed frames. BPNN, Naïve Bayes, SVM and Random Forest classifiers are used to classify the selected features, with SVM achieving an accuracy of 90%.
Jawahar et al10 extracted features from COVID-19 images using LBP. Classifiers, namely Random Forest (RF), k-Nearest Neighbour (kNN), Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), Linear Regression (LR), Multi-layer Perceptron neural network (MLP) and Support Vector Machine (SVM), classified the features, with Random Forest achieving an accuracy of 77.7%. Jawahar et al11 analysed diabetic foot ulcer images, achieving good accuracy using k-means clustering. Prassanna et al12 developed machine-learning-based models to enhance health care systems in different areas. Narendra et al13 presented various machine learning techniques used to analyse histology images to classify breast cancer. Arathi et al14 presented deep-learning-based models to analyse cracks in concrete images, achieving high accuracy.
This work provides an application for on-device inference: the classification model is pre-trained and stored on a mobile device, so it categorizes new data without needing external computation. We demonstrate the fundamental principles of this methodology with a case study on skin malignancy, one of the most common human cancers.
Literature survey:
The ABCDEs are attributes frequently used by dermatologists to classify melanomas. These comprise Asymmetry, irregular Borders, multiple or uneven distribution of Colour, a large (greater than 6mm) Diameter, and finally the Evolution of the moles (how the first four attributes change over time). Other comparable criteria or guidelines include the 3-point checklist4, the 7-point checklist, and Colour, Architecture, Symmetry and Homogeneity (CASH). However, visual diagnosis can be complicated and can lead to subjective outcomes. Skin lesion analysis involves challenging visual classification and segmentation tasks because of the wide variability in the appearance of the disease and also because of the wide variety and diversity of patients' skin. Computer vision and AI researchers have attempted melanoma classification and general skin disease analysis, and in this section we discuss some of these related past approaches.
Kawahara et al. present a CNN architecture which they apply to the Dermofit Image Library provided by the American Cancer Society.15 The dataset consisted of 1,300 skin images with corresponding class labels and lesion segmentations across 10 lesion categories, of which melanoma was one. They improved the state-of-the-art results on that dataset to 85.8% from the previous 75.1%. No lesion segmentations were performed. Similarly, Majtner et al. introduced a method for skin lesion classification which combined CNN and handcrafted features, specifically RSurf features and local binary patterns (LBP). They also compared the results of their method with the results from the melanoma classification challenge hosted by the International Skin Imaging Collaboration (ISIC), the same dataset we test our approach on. Their reported accuracy was between 79.4 and 80.5% on the ISIC dataset.
An article in the journal Nature presented a CNN trained end-to-end, where the disease labels were estimated from images directly, similar to the techniques described previously.4,16,17 The classification task was executed using a proprietary dermatologist-labelled dataset of 129,450 clinical images, including 3,374 dermoscopy images. They used the GoogleNet Inception v3 CNN architecture, pre-trained on around 1.28 million images (1,000 object classes) from the 2014 ImageNet Large Scale Visual Recognition Challenge.17,18,19 This pre-trained network was then trained on their skin lesion image dataset using transfer learning. For one of the partitions, the CNN achieved an accuracy of around 72.1%, compared with two dermatologists who achieved accuracies of 65.56% and 66% respectively. To classify melanoma images, a simple convolutional neural network architecture was used. The lesion images were classified without segmenting or cropping the lesions. Simple pre-processing steps were applied to the images, such as resizing them to 256 x 256 and subtracting the mean to centre the data. The labels of the images were arranged so that the learning algorithm does not receive images of the same label consecutively. The CNN classifier has 17 layers, making a total of 5 convolutional blocks. The test error, after the classifier is trained, is 0.189. A technique based on Mahalanobis distance learning and constrained graph-regularized non-negative matrix factorization is proposed by.2 The approach works by reducing the dimensionality of the features, based on the idea that feature vectors with a dimensionality of two or three hundred may not work well with a classifier; if the dimensionality is reduced, the performance of the classifier should improve. Training data is used to learn a Mahalanobis distance, which is afterwards used for local manifold construction.
METHODOLOGY:
Convolutional Neural Networks:
A convolutional neural network (CNN) applies the convolution operation, which combines two functions to produce a third, in place of general matrix multiplication. CNNs have at least one convolution layer, and their use in image classification is very widespread. Early work used a single convolution layer to demonstrate the concept of CNNs; researchers have since continued to improve on the initial approach, for example through deeper layer formations.
CNN Architecture:
The design of the system applied in this research is demonstrated in Fig. 1. The RGB (Red, Green, Blue) input skin image is standardized to zero mean and unit variance. This normalized matrix is given as the input to the convolution layer. The convolution layer is the initial layer, which convolves 16 distinct kernels of 7x7 pixels to produce 16 distinct output channels. The extracted feature channels are fed into a pooling layer to reduce their dimensions, which can also be described as sub-sampling. The sampled channels are then used as inputs for the following layers, termed fully connected layers.
This work uses a three-layer fully connected model to classify the images. Each subsequent layer lowers the number of neurons (e.g. 100, 50, 5). Compared to a deep convolutional neural network (DCNN), this work uses one convolution layer because only a small number of features are to be learned; this decreases the complexity of the CNN and prevents overfitting.
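As an illustrative sketch of the architecture described above (the layer widths follow the text; the 150×150 input size is taken from the pre-processing section and the activations are assumptions, as the paper does not name them):

```python
from tensorflow.keras import layers, models

# Single convolution layer with 16 kernels of 7x7, a pooling layer, and
# three fully connected layers narrowing toward the output (100, 50, 5).
model = models.Sequential([
    layers.Input(shape=(150, 150, 3)),              # normalized RGB image
    layers.Conv2D(16, kernel_size=7, activation='relu'),
    layers.MaxPooling2D(pool_size=2),               # sub-sampling
    layers.Flatten(),
    layers.Dense(100, activation='relu'),
    layers.Dense(50, activation='relu'),
    layers.Dense(5, activation='softmax'),          # class probabilities
])
```

A single convolution block keeps the parameter count low, matching the overfitting argument above.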
Fig 1. Proposed System Flow
Confusion Matrix:
In a classification problem, the confusion matrix is a summary of the predicted outcomes. The total number of accurate and inaccurate predictions is listed, with count values, for each of the classes. A classification model's confusion matrix shows where the model gets confused while making predictions. It provides insight not only into the errors being produced by a classifier but, more significantly, into the types of errors that are made.
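For instance, with scikit-learn (used later in this work), a confusion matrix over a toy set of lesion labels (invented here for illustration) looks like:

```python
from sklearn.metrics import confusion_matrix

# Rows are true classes, columns are predicted classes; off-diagonal
# entries show where the classifier is getting confused.
y_true = ['mel', 'nv', 'nv', 'bcc', 'mel', 'nv']
y_pred = ['mel', 'nv', 'mel', 'bcc', 'nv', 'nv']
cm = confusion_matrix(y_true, y_pred, labels=['bcc', 'mel', 'nv'])
print(cm)
# [[1 0 0]
#  [0 1 1]
#  [0 1 2]]
```

Here one true melanoma was predicted as nevus and one true nevus as melanoma, visible off the diagonal.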
Image Pre-processing:
For skin_recnn and skin_segnn, all images have been resized to 150×150 pixels. The skin_segnn input data contains 8,000 processed images. The skin_recnn dataset contains 8,000 processed images with cropped skin-lesion regions, because it requires high-quality pictures for better recognition. The cropped images from the skin_segnn segmentation results are used in the validation and testing process to verify that the results are correct.
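A minimal version of this resizing step could look as follows (Pillow is one option; the bilinear filter is an assumption, as the paper does not name the interpolation used):

```python
from PIL import Image

def preprocess(img, size=(150, 150)):
    """Resize a lesion image to the 150x150 input size used by the networks."""
    return img.convert('RGB').resize(size, Image.BILINEAR)

# Example with an in-memory image; a real pipeline would load dataset files.
raw = Image.new('RGB', (600, 450))
print(preprocess(raw).size)  # (150, 150)
```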
Segmentation Net:
Skin_segnn is a skin lesion segmentation neural network with a U-Net architecture, implemented with the deep learning frameworks TensorFlow, Keras, and Theano. The original images are the input, and the segmentation masks are the output. Convolution and pooling layers are included in skin_segnn's structure, and skip connections are used as in the U-Net design. This helps improve the network's strength and outcomes.20
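A minimal U-Net-style sketch with a single skip connection is shown below; the real skin_segnn is deeper, so the widths and depth here are illustrative assumptions:

```python
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(150, 150, 3))
e1 = layers.Conv2D(16, 3, padding='same', activation='relu')(inp)
p1 = layers.MaxPooling2D(2)(e1)                      # encoder: down-sample
b = layers.Conv2D(32, 3, padding='same', activation='relu')(p1)
u1 = layers.UpSampling2D(2)(b)                       # decoder: up-sample
u1 = layers.Concatenate()([u1, e1])                  # skip connection
out = layers.Conv2D(1, 1, activation='sigmoid')(u1)  # per-pixel lesion mask
segnn = Model(inp, out)
```

The skip connection reinjects high-resolution encoder features into the decoder, which is what gives U-Net its sharp mask boundaries.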
Every type of skin cancer is intensely risky if not devastating. Data from the Skin Cancer Foundation21 show how serious the situation is:
· Approximately 2 to 4 percent of all malignancies in Asians are skin cancers.
· Among Hispanics, skin cancer accounts for 4 to 5 percent of all malignancies.
· About 1 to 2 percent of all malignancies in dark-skinned people are skin cancers.
DATA SET:
The training data is downloaded from the Kaggle datasets. The dataset comprises a total of 10,000 images stored in two folders. The 10,000 dermoscopic images contain 374 melanoma images, 1,372 nevus images and 254 seborrheic keratosis images. Each image has some essential patient data, including sex and age. The information about the data is stored in a data frame.6
Architecture:
A mobile-friendly model was the ultimate goal of this project. The only architecture options considered were `Mobilenet_v1`, `MobileNet_v2`, `M-Nasnet`, and `Shufflenet`. This paper focuses on the MobileNet family as they are readily available in the `Keras model zoo`. `MobileNet_v2` was chosen as it is much faster on mobile than `Mobilenet_v1`.
Proposed Work:
The base network was used as the feature extractor, with all top classifier layers removed. The last layer retained for extracting features from MobileNet is the global average pooling layer; all layers above it are eliminated. The model's training was completed in two phases.22
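This setup can be sketched by loading the base network without its top classifier and keeping global average pooling as the feature output; the 150×150 input size and the single dense head are assumptions based on the pre-processing and dataset sections:

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2

def build_classifier(num_classes=7, weights='imagenet'):
    """MobileNetV2 feature extractor plus a fresh classification head."""
    base = MobileNetV2(input_shape=(150, 150, 3), include_top=False,
                       pooling='avg', weights=weights)  # top classifier removed
    base.trainable = False                              # frozen for phase one
    out = layers.Dense(num_classes, activation='softmax')(base.output)
    return Model(base.input, out)
```

`include_top=False` with `pooling='avg'` yields the 1280-dimensional feature vector on which the new head is trained.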
Transfer learning:
New layers were added on top of the base network and trained while the base network was kept frozen.
Fine-tuning:
Here we unfreeze some of the layers of the base network for improved gains.
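The unfreezing step might be sketched as below; the cut-off of 30 layers is an illustrative assumption, as the paper does not state how many layers were unfrozen:

```python
from tensorflow.keras.applications import MobileNetV2

def set_fine_tuning(base, unfreeze_last=30):
    """Phase two: unfreeze only the last `unfreeze_last` layers of the base."""
    base.trainable = True
    for layer in base.layers[:-unfreeze_last]:
        layer.trainable = False     # earlier layers stay frozen

# Example (weights=None here avoids downloading ImageNet weights):
base = MobileNetV2(input_shape=(150, 150, 3), include_top=False, weights=None)
set_fine_tuning(base, unfreeze_last=30)
```

Keeping the early layers frozen preserves the generic low-level filters while the later, more task-specific layers adapt to the lesion data.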
All the work done for this project is divided into three parts:
· EDA: - An exploratory analysis of data and some pre-processing.
· Skin_cancer_model: -A detailed notebook for building and training model
· Model_convert: - Notebook for converting keras model to tflite model
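The Model_convert step can be sketched with the standard TFLite converter (the tiny placeholder model and the output filename are assumptions for illustration):

```python
import tensorflow as tf

# Placeholder model standing in for the trained classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1280,)),
    tf.keras.layers.Dense(7, activation='softmax'),
])

# Convert the Keras model to a TensorFlow Lite flatbuffer for on-device use.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()
with open('skin_cancer.tflite', 'wb') as f:
    f.write(tflite_bytes)
```

The resulting `.tflite` file is what ships inside the mobile app for on-device inference.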
Model Training:
The training data is given to the network in batches of 50. The network is trained for around 20,000 iterations. The learning and testing process is repeated around 50 times in order to obtain unbiased results from the chosen training and test data. Python, TensorFlow, Scikit-learn, and Keras were used to develop the main algorithms. Scikit-learn is a collection of widely used machine learning algorithms written in Python. TensorFlow is an open-source tool that allows you to create your own deep learning models, and Keras operates above TensorFlow, allowing for effective experimentation.23 The architecture used in this work employs convolutional and pooling layers. Pooling is a method of nonlinear down-sampling, commonly applied after convolutional layers to eliminate unnecessary attributes and reduce the number of variables during training, which helps avoid overfitting; this work makes extensive use of max pooling (MaxPool2D). Following the convolutional and max pooling layers, a flattening layer and fully connected (FC) layers compress the multidimensional array into a vector. The final layer uses the softmax function, which computes the probability of each class (skin cancer type).20,24
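The Top-2 and Top-3 accuracy figures reported later can be tracked at compile time via Keras metrics; the placeholder model here is an assumption standing in for the trained classifier:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1280,)),
    tf.keras.layers.Dense(7, activation='softmax'),
])
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy',
             tf.keras.metrics.TopKCategoricalAccuracy(k=2, name='top2_accuracy'),
             tf.keras.metrics.TopKCategoricalAccuracy(k=3, name='top3_accuracy')])
```

Top-k accuracy counts a prediction as correct when the true class is among the k highest-probability outputs.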
Training MobileNetV2 CNN Model:
In order to effectively retrain a pre-trained model on skin lesion photos, there are two major scenarios to consider, depending on the data volume and similarity. In the first scenario, the dataset is small and the data similarity is low; here, the initial k layers of the network should ideally be frozen and the remaining n-k layers retrained. In the second scenario, there is a high degree of data similarity: the last layer of the network, also known as the dense or fully connected layer, is removed and the rest of the network is kept frozen.
In order to train our classifier, we use Google's Keras, a deep learning API.7 Keras runs on top of the deep learning framework TensorFlow17 and allows CNN training to be done quickly and intuitively. Fig. 2 illustrates the process of converting a pre-trained model into an app.
Fig.2: Conversion of Pre-trained model into an APP.
RESULT:
This work investigates the attributes learned by the deep convolutional neural network for multi-class classification. The attributes are obtained and the dataset is divided into seven categories, on which the data is trained and validated. The learning and testing process is repeated around 50 times, and the average values of the distinct groups of findings are described in Table 1, which shows the classification result of the proposed model.
Table 1. Classification report
Class             Precision   Recall   F1-score   Support
akiec             0.45        0.44     0.45       34
bcc               0.63        0.76     0.69       49
bkl               0.60        0.46     0.52       109
df                0.00        0.00     0.00       11
mel               0.58        0.32     0.41       93
nv                0.85        0.96     0.90       540
vasc              0.67        0.43     0.52       14
accuracy                               0.77       850
macro average     0.54        0.48     0.50       850
weighted average  0.74        0.77     0.75       850
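A report in the format of Table 1 is produced by scikit-learn's classification_report; a toy example follows, with labels invented for illustration (real labels would come from the held-out test split):

```python
from sklearn.metrics import classification_report

y_true = ['nv', 'nv', 'mel', 'bcc', 'nv', 'mel']
y_pred = ['nv', 'nv', 'nv', 'bcc', 'nv', 'mel']
report = classification_report(y_true, y_pred, digits=2)
print(report)  # per-class precision, recall, f1-score and support
```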
Comparison between training loss and validation loss is given in Fig 3.
Top 2 Accuracy of the proposed model is given in Fig 4. Top 3 Accuracy of the proposed model is given in Fig 5. The Consolidated Loss, Top2 and Top3 accuracy is listed in Fig 6.
Fig. 5. Top 3 Accuracy
Fig. 6. Loss and Top 2 and Top 3 Accuracy
Comparison result among various models is given in table 2.
Table 2. Comparison of similar models
Title                                                           Year   Model                                     Algorithm
Detection of skin cancer                                        2018   CNN                                       Snake algorithm, SVM
Machine learning for skin cancer detection on mobile            2019   CNN, MobileNet V2                         Confusion matrix
Mobile Dermoscopy application for skin cancer detection         2018   CNN, MobileNet V2                         Confusion matrix
Multi class skin disease classification using skin cancer       2016   CNN, Histogram                            Confusion matrix
Various image enhancement techniques for skin cancer detection  2015   CNN, Segmentation, Histogram              Confusion matrix, SVM
Automatically early detection of skin cancer                    2016   CNN, Segmentation, Graphical
                                                                       representation of image pixels            Confusion matrix
Skin Cancer detection using neural networks                     2018   CNN, MobileNet V2                         Feed-forward, Confusion matrix
CONCLUSION:
The proposed work classifies skin lesions using a convolutional neural network trained end-to-end and deployed on mobile through MobileNetV2 with transfer learning and fine-tuning. The dataset was divided into seven categories, on which the model was trained and validated, and the validation loss, Top-2 accuracy and Top-3 accuracy were calculated and compared with similar models. The proposed system can categorize healthy skin, eczema, acne, and malignant and benign skin lesions.
ACKNOWLEDGEMENT:
Authors acknowledge the immense help received from the scholars whose articles are cited and included in references of this manuscript. The authors are also grateful to authors / editors / publishers of all those articles, journals and books from where the literature for this article has been reviewed and discussed.
CONFLICT OF INTEREST:
The Author(s) declare(s) that there is no conflict of interest.
INDIVIDUAL AUTHOR’S CONTRIBUTION:
Author 1, 2 and 5 designed and performed the experiments, derived the models and analysed the data. Author 2, 3 and 4 took the lead in writing the manuscript with input from all authors. Author 3, 4 and 5 were involved in planning and supervised the work. All authors provided critical feedback and helped shape the research, analysis and manuscript.
REFERENCES:
1. Sonam S, Rekha B. Study of demographic profile of skin tumors in a tertiary care hospital. Int J Cur Res Rev. 2014; 06(16):24-28
2. Tavakolpour S, Daneshpazhooh M, Mahmoudi H. Skin cancer: Genetics, immunology, treatments, and psychological care. In: Cancer Genetics and Psychotherapy. Cham: Springer International Publishing; 2017. p. 851–934. doi: 10.1007/978-3-319-64550-6_18
3. Maglogiannis I, Doukas CN. Overview of advanced computer vision systems for skin lesions characterization. IEEE Trans Inf Technol Biomed. 2009;13(5):721–33. doi: 10.1109/TITB.2009.2017529.
4. Dai X, Spasic I, Meyer B, Chapman S, Andres F. Machine learning on mobile: An on-device inference app for skin cancer detection. In: 2019 Fourth International Conference on Fog and Mobile Edge Computing (FMEC). IEEE; 2019. doi: 10.1109/FMEC.2019.8795362
5. Janda M, Youl PH, Lowe JB, Elwood M, Ring IT, Aitken JF. Attitudes and intentions in relation to skin checks for early signs of skin cancer. Prev Med. 2004;39(1):11–8. doi: 10.1016/j.ypmed.2004.02.019
6. Hoffmann K, Gambichler T, Rick A, Kreutz M, Anschuetz M, Grünendick T, et al. Diagnostic and neural analysis of skin cancer (DANAOS). A multicentre study for collection and computer-aided analysis of data from pigmented skin lesions using digital dermoscopy. Br J Dermatol. 2003;149(4):801–9. doi: 10.1046/j.1365-2133.2003.05547.x
7. Sharon, J. Jenifa, and L. Jani Anbarasi. "Diagnosis of DCM and HCM heart diseases using neural network function." International Journal of Applied Engineering Research 13.10 (2018): 8664-8668. doi: 10.1007/978-981-16-1244-2_7
8. Prassanna J, Rahim R, Bagyalakshmi K, Manikandan R, Patan R. Effective use of deep learning and image processing for cancer diagnosis. In: Kose U, Alzubi J, editors. Deep Learning for Cancer Diagnosis. Studies in Computational Intelligence, vol 908. Springer; 2021. doi: 10.1007/978-981-15-6321-8_9
9. Sharon, J. Jenifa, L. Jani Anbarasi, and Benson Edwin Raj. "DPSO-FCM based segmentation and Classification of DCM and HCM Heart Diseases." 2018 Fifth HCT Information Technology Trends (ITT). IEEE, 2018. doi: 10.1109/CTIT.2018.8649511.
10. Jawahar, Malathy, et al. "Diagnosis of covid-19 using optimized pca based local binary pattern features." International Journal of Current Research and Review 13.6 Special Issue (2021). doi:http://dx.doi.org/10.31782/IJCRR.2021.SP171
11. Jawahar, Malathy, et al. "Diabetic Foot Ulcer Segmentation using Color Space Models." 2020 5th International Conference on Communication and Electronics Systems (ICCES). IEEE, 2020. doi: 10.1109/ICCES48766.2020.9138024
12. Prassanna J, Anbarasi LJ, Prabhakaran R, Kanchana Devi V, Vincent R, Manikandan R, Kumar A. Enhanced safe and secure distributed healthcare system through efficient duplication detection approach. Int J Cur Res Rev. 2020;12(24):101. doi: 10.31782/IJCRR.2020.122407
13. Modigari Narendra, L. Jani Anbarasi, S. Graceline Jasmine, J. Prassanna and R. Prabhakaran, “Breast Cancer Detection Using Histology Images: A Survey”, Jour of Adv Research in Dynamical & Control Systems, Vol. 12, 07-Special Issue, 2020. doi: 10.5373/JARDCS/V12SP7/20202140.
14. Reghukumar, Arathi, et al. "Vision Based Segmentation and Classification of Cracks Using Deep Neural Networks." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 29.Supp01 (2021): 141-156. doi:10.1142/S0218488521400080
15. Kawahara J, Hamarneh G. Multi-resolution-tract CNN with hybrid pretrained and skin-lesion trained layers. In: Machine Learning in Medical Imaging. Cham: Springer International Publishing; 2016. p. 164–71. doi: 10.1007/978-3-319-47157-0_20
16. Sunil G, Vinod M, Pr R. Exfoliative toxin mediated staphylococcal scalded skin syndrome: A review. Int J Curr Res Rev. 2020;12(22):86–90. doi:10.31782/IJCRR.2020.122212
17. Ech-Cherif A, Misbhauddin M, Ech-Cherif M. Deep neural network based mobile dermoscopy application for triaging skin cancer detection. In: 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS). IEEE; 2019. doi: 10.1109/CAIS.2019.8769517
18. Chiem A, Al-Jumaily A, Khushaba RN. A novel hybrid system for skin lesion detection. In: 2007 3rd International Conference on Intelligent Sensors, Sensor Networks and Information. IEEE; 2007.doi:10.1109/ISSNIP.2007.4496905
19. Lau HT, Al-Jumaily A. Automatically early detection of skin cancer: Study based on neural network classification. In: 2009 International Conference of Soft Computing and Pattern Recognition. IEEE; 2009. doi: 10.1109/SoCPaR.2009.80
20. Kalra M, Kumar S. Various image enhancement techniques for skin cancer detection using mobile app. In: 2015 International Conference on Computer, Communication and Control (IC4). IEEE; 2015.doi: 10.1109/IC4.2015.7375681
21. Skin Cancer Facts & Statistics, accessed 10 January 2021, < https://www.skincancer.org/skin-cancer-information/skin-cancer-facts/>
22. Nasr-Esfahani E, Samavi S, Karimi N, Soroushmehr SMR, Jafari MH, Ward K, et al. Melanoma detection by analysis of clinical images using convolutional neural network. Annu Int Conf IEEE Eng Med Biol Soc. 2016;2016:1373–6.doi: 10.1109/EMBC.2016.7590963
23. TensorFlow (TF) to CoreML Converter [Accessed: 5 Feb.2019]. Available from: https://github.com/tf-coreml/tfcoreml.
24. Nirali A, Shah S, Prajapati S, Hansa G. Histomorphological spectrum of skin adnexal tumors at a tertiary care hospital - a retrospective study. Int J Cur Res Rev. 2016;08(04):13-18.
Received on 26.03.2021 Modified on 29.10.2021
Accepted on 02.02.2022 © RJPT All right reserved
Research J. Pharm. and Tech 2022; 15(10):4519-4525.
DOI: 10.52711/0974-360X.2022.00758